998 research outputs found

    The parallel architecture

    Theoretical and Experimental Linguistics

    Relational morphology: a cousin of construction grammar

    Theoretical and Experimental Linguistics

    Animacy effects on the processing of intransitive verbs: An eye-tracking study

    This paper tested an assumption of the gradient model of split intransitivity put forward by Sorace (“Split Intransitivity Hierarchy” (SIH), 2000, 2004), namely that agentivity is a fundamental feature for unergatives but not for unaccusatives. According to this hypothesis, the animacy of the verb’s argument should affect the processing of unergative verbs to a greater extent than that of unaccusative verbs. Using eye-tracking methodology, we monitored the online processing and integration costs of the animacy of the verb’s argument in intransitive verbs. We observed that inanimate subjects caused longer reading times only for unergative verbs, whereas the animacy of the verb’s argument did not influence the pattern of results for unaccusatives. In addition, the unergative verb data directly support the existence of gradient effects on the processing of the subject argument.

    WordNet for Italian and its use for lexical discrimination


    Artificial Brains and Hybrid Minds

    The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the respective ‘components’ of thinking systems, while the ‘stuff’ of material implementation is of secondary importance. Thus the paper extends the received animat paradigm and investigates some intriguing consequences of expanding the conception of bio-machine hybrids to include abstract functional and semantic structure. In particular, the thought experiments consider cases of mind-machine merger where there is no physical Brain-Machine Interface: indeed, the material human body and brain have been removed from the picture altogether. The first experiment illustrates some intrinsic theoretical difficulties in attempting to replicate the human mind in an alternative material medium, while the second reveals some deep conceptual problems in attempting to create a form of truly Artificial General Intelligence.

    Two kinds of procedural semantics for privative modification

    In this paper we present two kinds of procedural semantics for privative modification. We do this for three reasons. The first reason is to launch a tough test case to gauge the degree of substantial agreement between a constructivist and a realist interpretation of procedural semantics; the second is to extend Martin-Löf’s Constructive Type Theory to privative modification, which is characteristic of natural language; the third reason is to sketch a positive characterization of privation.

    Humans store about 1.5 megabytes of information during language acquisition

    We introduce theory-neutral estimates of the amount of information learners possess about how language works. We provide estimates at several levels of linguistic analysis: phonemes, wordforms, lexical semantics, word frequency and syntax. Our best guess is that the average English-speaking adult has learned 12.5 million bits of information, the majority of which is lexical semantics. Interestingly, very little of this information is syntactic, even in our upper bound analyses. Generally, our results suggest that learners possess remarkable inferential mechanisms capable of extracting, on average, nearly 2000 bits of information about how language works each day for 18 years.
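    The headline figures above follow from straightforward arithmetic on the abstract's own numbers, which a few lines can verify (the 18-year window is taken directly from the abstract; the per-day rate is an approximation ignoring leap years):

```python
# Back-of-envelope check of the abstract's figures.
total_bits = 12.5e6                 # estimated bits learned by an adult
megabytes = total_bits / 8 / 1e6    # 8 bits per byte, 1e6 bytes per MB
days = 18 * 365                     # 18 years of acquisition
bits_per_day = total_bits / days

print(round(megabytes, 2))   # 1.56 MB, i.e. "about 1.5 megabytes"
print(round(bits_per_day))   # 1903 bits/day, i.e. "nearly 2000 bits"
```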

    Exploiting Lexical Conceptual Structure for paraphrase generation

    Lexical Conceptual Structure (LCS) represents verbs as semantic structures built from a limited number of semantic predicates. This paper explores how LCS can be used to explain the regularities underlying lexical and syntactic paraphrases, such as verb alternation, compound word decomposition, and lexical derivation. We propose a paraphrase generation model which transforms the LCSs of verbs, and then conduct an empirical experiment taking the paraphrasing of Japanese light-verb constructions as an example. Experimental results show that the syntactic and semantic properties of verbs encoded in LCS are useful for semantically constraining the syntactic transformations in paraphrase generation.
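    As a purely illustrative sketch of the general idea (the predicate names and the transformation rule below are hypothetical and are not taken from the paper), an LCS-style decomposition and one verb-alternation rewrite might look like:

```python
# Hypothetical LCS-style decomposition: a verb's semantics as a small
# nested predicate structure (predicate inventory is illustrative only).
break_tr = {"pred": "CAUSE",              # "x breaks y": x causes ...
            "agent": "x",
            "effect": {"pred": "BECOME",  # ... y to become broken
                       "theme": "y",
                       "state": "BROKEN"}}

def decausativize(lcs):
    """Toy alternation rule: drop the CAUSE layer to derive the
    intransitive variant ("y breaks") from the causative one."""
    if lcs["pred"] == "CAUSE":
        return lcs["effect"]
    return lcs

print(decausativize(break_tr))
# {'pred': 'BECOME', 'theme': 'y', 'state': 'BROKEN'}
```

    The point of such a representation is that a paraphrase relation (here, the causative/inchoative alternation) becomes a structural operation on the semantic form rather than an unanalyzed pairing of surface strings.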
